Techniques for the dynamic randomization of network attributes
Critical infrastructure control systems continue to foster predictable communication paths and static configurations that allow easy access to our networked critical infrastructure around the world. This makes them attractive and easy targets for cyber-attack. We have developed technologies that address these attack vectors by automatically reconfiguring network settings. Applying these protective measures converts control systems into "moving targets" that proactively defend themselves against attack. This "Moving Target Defense" (MTD) revolves around dynamically reconfiguring the network, securely communicating reconfiguration specifications to other network nodes as required, and ensuring that connectivity between nodes is uninterrupted. Software-Defined Networking (SDN) is leveraged to meet many of these goals. Our MTD approach prevents adversaries from targeting known static attributes of network devices and systems, and consists of the following three techniques: (1) network randomization for TCP/UDP ports; (2) network randomization for IP addresses; (3) network randomization for network paths. In this paper, we describe the implementation of these technologies. We also discuss the individual and collective successes of the techniques, challenges for deployment, constraints and assumptions, and the performance implications of each technique.
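As a concrete illustration of the first technique, port randomization can be sketched as a shared-secret port hop (the function and parameter names here are hypothetical; the paper's actual SDN-based mechanism is not shown): both endpoints derive the current port for a logical service from a shared key and a time epoch, so the port an attacker scans changes periodically while legitimate peers stay in sync.

```python
import hmac
import hashlib
import time

def randomized_port(secret: bytes, service_id: str, epoch_len: int = 60,
                    low: int = 20000, high: int = 65000,
                    now: float = None) -> int:
    """Map a logical service to an ephemeral TCP/UDP port for the current
    time epoch. Endpoints sharing `secret` derive the same port, so
    connectivity is uninterrupted across each re-randomization."""
    epoch = int((now if now is not None else time.time()) // epoch_len)
    digest = hmac.new(secret, f"{service_id}:{epoch}".encode(),
                      hashlib.sha256).digest()
    # Fold the keyed digest into the allowed port range.
    return low + int.from_bytes(digest[:4], "big") % (high - low)
```

A scan performed in one epoch yields ports that are stale in the next, which is the "moving target" effect the abstract describes.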
Workflow automation in liquid chromatography mass spectrometry
We describe a fully automated workflow developed for the ingest and analysis of liquid chromatography mass spectrometry (LCMS) data. With the help of this computational workflow, we were able to replace two human work days of data analysis with two hours of unsupervised computation time. In addition, the tool can also compute confidence intervals for all of its results, based on the noise level present in the data. We leverage only open-source tools and libraries in this workflow.
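The abstract does not specify how the noise-based confidence intervals are computed; a generic percentile-bootstrap sketch (the function name and defaults are assumptions, not the paper's method) shows one common way to turn the noise present in the data into an interval for a statistic:

```python
import random

def bootstrap_ci(samples, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for `stat` (mean by
    default), driven only by the scatter in the data itself."""
    rng = random.Random(seed)
    boots = sorted(stat([rng.choice(samples) for _ in samples])
                   for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Noisier data produces wider resampled spread and hence a wider interval, matching the abstract's statement that the intervals are based on the noise level present in the data.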
SPARCS: Stream-processing architecture applied in real-time cyber-physical security
In this paper, we showcase a complete, end-to-end, fault-tolerant, bandwidth- and latency-optimized architecture for real-time utilization of data from multiple sources that allows the collection, transport, storage, processing, and display of both raw data and analytics. This architecture can be applied to a wide variety of applications ranging from automation/control to monitoring and security. We propose a practical, hierarchical design that allows easy addition and reconfiguration of software and hardware components, while utilizing local processing of data at the sensor or field-site ('fog computing') level to reduce latency and upstream bandwidth requirements. The system supports multiple fail-safe mechanisms to guarantee the delivery of sensor data. We describe the application of this architecture to cyber-physical security (CPS) by supporting security monitoring of an electric distribution grid, through the collection and analysis of distribution-grid-level phasor measurement unit (PMU) data, as well as Supervisory Control and Data Acquisition (SCADA) communication in the control area network.
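The fog-computing idea above can be sketched as a local aggregation stage (class and field names are illustrative, not from the paper): the edge node buffers raw sensor samples and forwards only windowed summaries upstream, trading raw fidelity for reduced latency and bandwidth.

```python
from statistics import mean

class EdgeAggregator:
    """Illustrative 'fog computing' stage: summarize raw sensor samples
    at the field site and emit only windowed statistics upstream."""

    def __init__(self, window: int = 10):
        self.window = window
        self.buffer = []

    def ingest(self, sample: float):
        """Buffer a raw sample; return a summary dict once the window
        fills (this is what would be sent upstream), else None."""
        self.buffer.append(sample)
        if len(self.buffer) == self.window:
            summary = {"n": self.window,
                       "mean": mean(self.buffer),
                       "min": min(self.buffer),
                       "max": max(self.buffer)}
            self.buffer = []
            return summary
        return None
```

With a window of 10, upstream traffic shrinks to roughly one summary record per ten raw samples, which is the bandwidth-reduction effect the design targets.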
Designed-in security for cyber-physical systems
An expert from academia, one from a cyber-physical system (CPS) provider, and one from an end asset owner and user offer their different perspectives on the meaning and challenges of 'designed-in security.' The academic highlights foundational issues and discusses emerging technology that can help us design and implement secure software in CPSs. The vendor's view includes components of the academic view but emphasizes the secure system development process and the standards that the system must satisfy. The user issues a call to action and offers ideas that will ensure progress.
Anomaly Detection for Science DMZs Using System Performance Data
Science DMZs are specialized networks that enable large-scale distributed scientific research, providing efficient and guaranteed performance while transferring large amounts of data at high rates. The high-speed performance of a Science DMZ is made viable via data transfer nodes (DTNs), making them a critical point of failure. DTNs are usually monitored with network intrusion detection systems (NIDS). However, NIDS do not consider system performance data, such as network I/O interrupts and context switches, which can also be useful in revealing anomalous system performance potentially arising from external network-based attacks or insider attacks. In this paper, we demonstrate how system performance metrics can be applied towards securing a DTN in a Science DMZ network. Specifically, we evaluate the effectiveness of system performance data in detecting TCP-SYN flood attacks on a DTN using DBSCAN (a density-based clustering algorithm) for anomaly detection. Our results demonstrate that system interrupts and context switches can be used to successfully detect TCP-SYN floods, suggesting that system performance data could be effective in detecting a variety of attacks not easily detected through network monitoring alone.
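A minimal sketch of this detection approach (not the authors' implementation, and with made-up feature values): run DBSCAN over two system-performance features, interrupt rate and context-switch rate, and treat points left as noise (label -1) as anomalies, which is how a sudden SYN-flood burst would surface against the dense cluster of normal operation.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN. Returns one label per point: a cluster id >= 0,
    or -1 for noise (i.e., an anomaly in this setting)."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a cluster
        labels[i] = cluster
        seeds, k = list(nbrs), 0
        while k < len(seeds):       # expand the cluster breadth-first
            j = seeds[k]; k += 1
            if labels[j] == -1:     # promote noise to border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is core too: keep expanding
                seeds.extend(jn)
    return labels
```

Normal (interrupts, context switches) samples form a dense cluster, while the few flood-time samples sit far away with too few neighbors to be core points, so DBSCAN labels them -1.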
Big Data and Analysis of Data Transfers for International Research Networks Using NetSage
Modern science is increasingly data-driven and collaborative in nature. Many scientific disciplines, including genomics, high-energy physics, astronomy, and atmospheric science, produce petabytes of data that must be shared with collaborators all over the world. The National Science Foundation-supported International Research Network Connection (IRNC) links have been essential to enabling this collaboration, but as data sharing has increased, so has the amount of information being collected to understand network performance. New capabilities to measure and analyze the performance of international wide-area networks are essential to ensure end users are able to take full advantage of such infrastructure for their big data applications. NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of monitoring today's high-speed international research networks. NetSage collects data on both backbone links and exchange points, which can amount to as much as 1Tb per month. This puts a significant strain on hardware, not only in terms of the storage needed to hold multi-year historical data, but also in terms of the processor and memory needed to analyze the data to understand network behaviors. This paper addresses the basic NetSage architecture, its current data collection and archiving approach, and details the constraints of this big data problem of handling vast amounts of monitoring data, while providing useful, extensible visualization to end users.
Resolving the Unexpected in Elections: Election Officials' Options
This paper seeks to assist election officials and their lawyers in effectively handling technical issues that can be difficult to understand and analyze, allowing them to protect themselves and the public interest from unfair accusations, inaccuracies in results, and conspiracy theories. The paper helps empower officials to recognize which types of voting system events and indicators need a more structured analysis, and what steps to take to set up the evaluations (or forensic assessments) with computer experts.
Phasor Measurement Units Optimal Placement and Performance Limits for Fault Localization
In this paper, the performance limits of fault localization are investigated using synchrophasor data. The focus is on a non-trivial operating regime where the number of Phasor Measurement Unit (PMU) sensors available is insufficient to provide full observability of the grid state. The proposed analysis uses the Kullback-Leibler (KL) divergence between the distributions corresponding to different fault location hypotheses associated with the observation model. This analysis shows that the most likely locations are concentrated in clusters of buses more tightly connected to the actual fault site, akin to graph communities. Consequently, a PMU placement strategy is derived that achieves a near-optimal resolution for localizing faults with a given number of sensors. The problem is also analyzed from the perspective of sampling a graph signal, examining how the placement of the PMUs, i.e., the spatial sampling pattern, and the topological characteristics of the grid affect the ability to successfully localize faults. To highlight the superior performance of the presented fault localization and placement algorithms, the proposed strategy is applied to modified IEEE 34- and IEEE 123-bus test cases and to data from a real distribution grid. Additionally, the detection of cyber-physical attacks is also examined, where PMU data and relevant Supervisory Control and Data Acquisition (SCADA) network traffic information are compared to determine whether a network breach has affected the integrity of the system information and/or operations.
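The KL-divergence criterion can be illustrated under a simplifying assumption (mine, not necessarily the paper's observation model): if the PMU measurements under fault hypothesis k are Gaussian with mean mu_k and a shared isotropic covariance, the KL divergence between two hypotheses reduces to half the squared distance between their means scaled by the noise variance, so hypotheses with similar predicted phasor signatures, such as electrically close buses, are hard to tell apart.

```python
def kl_gaussian_shared_cov(mu_a, mu_b, sigma2):
    """KL divergence D(N(mu_a, s2*I) || N(mu_b, s2*I))
    = 0.5 * ||mu_a - mu_b||^2 / s2.
    Small values mean the two fault-location hypotheses are nearly
    indistinguishable from the PMU observations."""
    return 0.5 * sum((a - b) ** 2 for a, b in zip(mu_a, mu_b)) / sigma2
```

A placement that spreads the hypothesis means apart in measurement space maximizes these pairwise divergences, which is the intuition behind choosing PMU sites to sharpen fault-localization resolution.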